Sights on AI: Expanding Human Perception Through AI – A Conversation with Daisuke Iso
Imaging & Sensing
Sony AI
October 23, 2025
Daisuke Iso is a Staff Research Scientist at Sony AI and leads the Imaging and Sensing Flagship Project. With more than a decade of experience in imaging and sensing technology across academia and industry, his research interests span computational photography, computer vision, and machine learning. He received his B.E., M.E., and Ph.D. degrees in information and computer science from Keio University, and was a visiting scholar at Columbia University from 2011 to 2013. Before joining Sony AI, Daisuke worked at Sony and Toyota Research Institute Advanced Development (TRI-AD), where he gained expertise in imaging R&D and autonomous driving systems.
At Sony AI, Daisuke heads an international team dedicated to revolutionizing how humans and machines perceive and interact with the world. Their research focuses on advancing the imaging and sensing pipeline. The goal is not just to complement but to augment human perception – expanding our ability to make sense of the complex world we inhabit.
In this blog, Daisuke reflects on his career journey, the evolution of AI in imaging, and how his team is pushing the boundaries of what it means to truly “see.”
Can you share a bit about your early inspirations and how you began your career?
I was drawn to computer science because of the creative possibilities it offered. When I joined Sony as a software engineer, I quickly realized that production-level code demanded rigor and precision. The embedded software I encountered in production was far beyond what I had written in university research, and I was impressed by the meticulous attention engineers paid to every detail. Even today, I keep those lessons in mind whenever I write or review code.
In parallel, I worked with diverse partners and product teams, which taught me how to translate research into practical improvements. Those experiences instilled discipline and attention to detail that continue to guide me today.
What led you into imaging and sensing research at Sony?
Early on, I had the chance to work across very different fields, including audio-visual products, imaging R&D, and applied research on sensors. I was particularly energized during the rise of computational photography. I spent time reading the latest research papers, experimenting with new methods, and exploring their applications. That period was exciting because computational photography was one of the hottest fields, and it opened my eyes to the potential of combining algorithms with hardware innovation. I also worked on applied research for image sensors, which was critical for Sony’s competitiveness at the time. These experiences became the foundation for my later AI-based imaging research.
How did your time at TRI-AD broaden your perspective?
At Toyota Research, I focused on camera systems for autonomous driving. That work required balancing strict safety requirements with usability, which gave me a deep respect for reliability and practicality. For example, I worked on defining functional safety requirements for camera systems, then implementing software that could meet those standards in production. Detecting system defects rigorously while keeping usability intact was a constant challenge. I also led the system architecture for remote driving prototypes that were tested at proving grounds. Those experiences sharpened my ability to integrate cutting-edge research with the realities of product development.
What brought you back to Sony, and what excites you about leading Sony AI’s Imaging and Sensing Flagship Project?
The opportunity to combine imaging, sensing, and AI in one role drew me back to Sony. I have always been inspired by the Kaizen philosophy of continuous improvement, and I wanted to bring that mindset to my own teams. Today I lead an international group of 11 researchers across Japan, Israel, Switzerland, and the U.S. We work across four time zones, but we’re united by open communication and a shared sense of purpose. I intentionally built a team with heterogeneous expertise – spanning computational photography, domain adaptation, continual learning, and engineering – so we can cover a wide range of computer vision and machine learning domains.
From your perspective, what have been the most important turning points in AI?
I see two major shifts. The first was autonomy – the transition from rule-based systems to learning-based models after 2012, which completely transformed the field. The second was generalization – when AI tools began to combine functions that used to be handled separately. Twenty years ago, I never imagined AI would become such a widely applicable, general-purpose technology.
How have sensors and semiconductors shaped AI’s progress?
They have played a pivotal role. Smartphones made high-quality image capture widely available, generating massive datasets for training AI. At the same time, breakthroughs in sensor resolution and usability accelerated AI research further. Advances in imaging and in AI have reinforced one another in powerful ways.
What is your team at Sony AI working on today?
Our goal is to maximize imaging and sensing capabilities using AI. Traditional RGB pipelines are optimized for human perception and often discard information that could be critical for specific applications, such as scientific analysis or autonomous navigation. We are creating AI-optimized pipelines for computer vision. This allows us to preserve and enhance information that is critical for sensing applications, making it possible to extract precise, application-specific insights from any scene.
Some of our focus areas include:
- Improving image quality in challenging conditions, such as low light or adverse weather, where we’ve achieved state-of-the-art results
- Developing AI-optimized preprocessing that boosts recognition accuracy by several points
- Creating novel data augmentation methods, like RGB-to-RAW conversion, that strengthen training sets
- Designing line-wise AI processing that can run directly on sensors with strict memory constraints while maintaining near-original model accuracy

By addressing these challenges, we strengthen Sony’s leadership in image sensors while also creating real-world impact for customers.
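To give a flavor of what RGB-to-RAW conversion involves, here is a minimal, illustrative sketch of the idea: starting from a processed RGB image, undo the gamma curve and white balance, then re-mosaic to a Bayer pattern to approximate sensor RAW data for training. This is not Sony AI's actual method; the white-balance gains and the simplified pipeline (no inverse tone curve or color correction matrix) are placeholder assumptions.

```python
import numpy as np

def srgb_to_linear(rgb):
    """Invert the standard piecewise sRGB gamma curve."""
    return np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

def unprocess_to_raw(rgb, wb_gains=(2.0, 1.0, 1.5)):
    """Toy RGB-to-RAW conversion: undo gamma, undo white balance, re-mosaic.

    A real inverse ISP would also invert the tone curve and the color
    correction matrix; wb_gains here are illustrative placeholders.
    """
    lin = srgb_to_linear(rgb.astype(np.float64))
    lin = lin / np.array(wb_gains)          # undo per-channel white-balance gains
    h, w, _ = lin.shape
    bayer = np.zeros((h, w))                # RGGB Bayer mosaic
    bayer[0::2, 0::2] = lin[0::2, 0::2, 0]  # R
    bayer[0::2, 1::2] = lin[0::2, 1::2, 1]  # G
    bayer[1::2, 0::2] = lin[1::2, 0::2, 1]  # G
    bayer[1::2, 1::2] = lin[1::2, 1::2, 2]  # B
    return bayer

rgb = np.random.rand(4, 4, 3)               # stand-in for a processed RGB image
raw = unprocess_to_raw(rgb)
print(raw.shape)                            # single-channel Bayer plane: (4, 4)
```

Augmenting training sets this way lets RAW-domain models learn from the vast supply of ordinary RGB imagery rather than scarce sensor captures.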
Looking ahead, what opportunities and challenges do you see for imaging and sensing AI?
Generative AI, combined with increasing compute power, has accelerated progress in the past few years. I am excited about the potential to combine generative AI with image capture to create next-generation imaging systems. For example, while people typically focus on the main subject of an image, generative AI could fill in fine background details in a way that reduces hardware requirements while still producing high-quality, photo-realistic outputs. This synergy between image generation and image capture could fundamentally reshape what “cameras” look like in the future. At the same time, fidelity remains a challenge. Generative AI can still produce hallucinations, and preserving accuracy is essential for real-world applications.
How would you describe your leadership style and the culture at Sony AI?
Sony AI is unique in the way it respects ideas and accepts failure as part of innovation. I believe innovation happens when ideas are allowed to grow, be combined, and evolve. As a leader, I try to provide autonomy, foster open communication, and encourage idea generation across disciplines.
How do you see AI shaping creativity in the future?
I believe AI should expand, not replace, human creativity. Advancements in sensors already allow us to capture fast-moving objects at night, but many challenges remain. I am inspired by the possibility that AI could one day allow creators to capture scenes in even more extreme conditions. Our research can help give photographers and other creators new freedom – tools to express ideas and moments that were once impossible.